46 research outputs found

    Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach

    Full text link
    Deep learning models have been used to create effective image classification applications. However, they are vulnerable to adversarial attacks that seek to mislead the models into predicting incorrect classes. Our study of major adversarial attack models shows that they all specifically target and exploit neural network structures in their designs. This observation led us to hypothesise that most classical machine learning models, such as Random Forest (RF), are immune to these adversarial attack models because they do not rely on a neural network design at all. Our experimental study of classical machine learning models against popular adversarial attacks supports this hypothesis. Based on it, we propose a new adversarial-aware deep learning system that uses a classical machine learning model as a secondary verification system to complement the primary deep learning model in image classification. Although the secondary classical model produces less accurate output, it is used only for verification, so it does not affect the output accuracy of the primary deep learning model, while it can effectively detect an adversarial attack when a clear mismatch occurs. Our experiments on the CIFAR-100 dataset show that the proposed approach outperforms current state-of-the-art adversarial defense systems.
    Comment: 17 pages, 3 figures
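The verification rule the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the two stand-in "models" are hypothetical lambdas, and only the mismatch rule reflects the abstract.

```python
def detect_adversarial(x, primary_predict, secondary_predict):
    """Return (primary_label, is_suspicious)."""
    p = primary_predict(x)
    s = secondary_predict(x)
    # The secondary model is only a verifier: its (less accurate) label is
    # never returned as the prediction, only compared against the primary's.
    return p, (p != s)

# Toy stand-ins (hypothetical): on a clean input both models agree; on an
# adversarial input only the neural primary model is fooled.
primary = lambda x: "cat" if x != "perturbed" else "truck"
secondary = lambda x: "cat"

print(detect_adversarial("clean", primary, secondary))      # ('cat', False)
print(detect_adversarial("perturbed", primary, secondary))  # ('truck', True)
```

The primary model's accuracy is untouched because its label is always the one returned; the secondary model only raises a flag.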

    Mutual Information Input Selector and Probabilistic Machine Learning Utilisation for Air Pollution Proxies

    Get PDF
    An air pollutant proxy is a mathematical model that estimates an unobserved air pollutant from other measured variables. A proxy is useful for filling missing data in a research campaign or for substituting a real measurement to minimise cost and the operators involved (i.e., a virtual sensor). In this paper, we present a generic concept of pollutant proxy development based on an optimised data-driven approach. We use a mutual information concept to determine the interdependence of different variables and thus select the most correlated inputs. The most relevant variables are selected as the proxy inputs, with several metrics and the amount of data loss also used for guidance. The input selection method determines the data used for training pollutant proxies based on a probabilistic machine learning method. In particular, we use a Bayesian neural network that naturally prevents overfitting and provides confidence intervals around its output prediction. In this way, the prediction uncertainty can be assessed and evaluated. To demonstrate the effectiveness of our approach, we test it on an extensive air pollution database to estimate ozone concentration.
    Peer reviewed
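The input-selection step can be illustrated with a small mutual-information ranking over discretised candidate series. This is a generic sketch of the concept, not the paper's pipeline; the variable names and data are hypothetical.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_inputs(candidates, target, k):
    """Rank candidate input series by MI with the target; keep the top k."""
    ranked = sorted(candidates,
                    key=lambda name: mutual_information(candidates[name], target),
                    reverse=True)
    return ranked[:k]

# Hypothetical discretised measurements: 'no2' tracks the target, 'noise' does not.
target = [0, 0, 1, 1, 0, 1, 0, 1]
cands = {"no2":   [0, 0, 1, 1, 0, 1, 0, 1],   # identical -> maximal MI
         "noise": [0, 1, 0, 1, 0, 1, 0, 1]}   # weakly related -> low MI
print(select_inputs(cands, target, 1))  # ['no2']
```

The selected inputs would then feed the probabilistic model (here, the Bayesian neural network) for training.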

    Predicting the Level of Respiratory Support in COVID-19 Patients Using Machine Learning

    Get PDF
    In this paper, a machine learning-based system for predicting the required level of respiratory support in COVID-19 patients is proposed. The level of respiratory support is divided into three classes: class 0 (minimal support), class 1 (non-invasive support), and class 2 (invasive support). A two-stage classification system is built: first, class 0 is separated from the other classes; then, class 1 is distinguished from class 2. The system is built using a dataset collected retrospectively from 3491 patients admitted to tertiary care hospitals at the University of Louisville Medical Center. The use of a feature selection method based on analysis of variance is demonstrated in the paper, and a dimensionality reduction method, principal component analysis, is also used. The XGBoost classifier achieved the best classification accuracy in the first stage (84%) and also performed best in the second stage, with a classification accuracy of 83%
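The two-stage decision flow described above can be sketched as follows. The threshold rules and feature names here are hypothetical stand-ins for the trained XGBoost classifiers; only the cascade structure reflects the abstract.

```python
def stage1_is_minimal(features):
    # Hypothetical stage-1 rule: class 0 (minimal support) vs classes 1/2.
    return features["spo2"] >= 94

def stage2_is_invasive(features):
    # Hypothetical stage-2 rule: class 2 (invasive) vs class 1 (non-invasive).
    return features["resp_rate"] > 30

def predict_support_level(features):
    """Cascade the two binary classifiers into a three-class prediction."""
    if stage1_is_minimal(features):
        return 0                      # minimal support
    return 2 if stage2_is_invasive(features) else 1

print(predict_support_level({"spo2": 96, "resp_rate": 18}))  # 0
print(predict_support_level({"spo2": 88, "resp_rate": 24}))  # 1
print(predict_support_level({"spo2": 85, "resp_rate": 35}))  # 2
```

Splitting one three-class problem into two binary stages lets each classifier specialise on a simpler decision boundary.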

    A deep learning-based approach for automatic segmentation and quantification of the left ventricle from cardiac cine MR images

    Get PDF
    © 2020 Elsevier Ltd. Cardiac MRI has been widely used for noninvasive assessment of cardiac anatomy and function as well as for heart diagnosis. Estimating physiological heart parameters for diagnosis essentially requires accurate segmentation of the left ventricle (LV) from cardiac MRI. We therefore propose a novel deep learning approach for the automated segmentation and quantification of the LV from cardiac cine MR images, aiming at lower errors for the estimated heart parameters than previous studies. Our framework starts with accurate localization of the LV blood-pool center point using a fully convolutional neural network (FCN) architecture called FCN1. A region of interest (ROI) containing the LV is then extracted from all heart sections. The extracted ROIs are used to segment the LV cavity and myocardium via a novel FCN architecture called FCN2, which has several bottleneck layers and a smaller memory footprint than conventional architectures such as U-net. Furthermore, a new loss function called radial loss, which minimizes the distance between the predicted and true contours of the LV, is introduced into our model. Following myocardial segmentation, functional and mass parameters of the LV are estimated. The Automated Cardiac Diagnosis Challenge (ACDC-2017) dataset was used to validate our framework, which gave better segmentation, more accurate estimation of cardiac parameters, and lower error than other methods applied to the same dataset. Furthermore, we showed that our segmentation approach generalizes well across datasets by testing its performance on a locally acquired dataset. In summary, we propose a deep learning approach that can be translated into a clinical tool for heart diagnosis
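One plausible reading of the "radial loss" idea, a penalty on the distance between predicted and true LV contours, is a mean point-to-point distance over contours sampled at matching angular positions. The sketch below illustrates that reading only; it is an assumption, not the paper's exact loss.

```python
from math import hypot

def radial_loss(pred_contour, true_contour):
    """Mean Euclidean distance between corresponding contour points.
    Assumes both contours are sampled at the same angular positions
    (a hypothetical simplification of the abstract's radial loss)."""
    assert len(pred_contour) == len(true_contour)
    return sum(hypot(px - tx, py - ty)
               for (px, py), (tx, ty) in zip(pred_contour, true_contour)) / len(pred_contour)

# Toy contours: the prediction is a circle dilated by 0.1 in each axis direction.
true_c = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
pred_c = [(1.1, 0.0), (0.0, 1.1), (-1.1, 0.0), (0.0, -1.1)]
print(radial_loss(pred_c, true_c))  # ~0.1
```

A contour-distance penalty of this kind directly targets boundary accuracy, whereas a plain pixel-wise loss treats all misclassified pixels equally.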

    Physicochemical characterization of natural hydroxyapatite/cellulose composite

    Get PDF
    Natural hydroxyapatite (HAp, activated at different temperatures)/cellulose composites have been prepared by a sonication method to improve the physical properties of the cellulose fibre. The molecular-level interaction and the physical properties of the hydroxyapatite/cellulose composites are examined using FTIR, X-ray diffraction, SEM, and thermal analysis. The absorption bands at around 660 cm⁻¹ confirm the O–P–O bending vibration in the HAp/cellulose composites. There is a difference in the d-spacing of the HAp/cellulose composite, indicating that the HAp is reactive towards cellulose. SEM indicates that HAp can penetrate the cellulose network structure to form particles, which helps improve the mechanical properties of the cellulose. The porosities of the HAp/cellulose composites decrease, and their compressive strengths increase, compared to those of cellulose. Thermogravimetric analysis confirms that the prepared composites have the highest thermal stability

    An improved model to estimate trapping parameters in polymeric materials and its application on normal and aged low-density polyethylenes

    Full text link
    Trapping parameters are among the important attributes used to describe polymeric materials. In the present paper, a more accurate charge dynamics model has been developed, which takes into account charge dynamics in both the volts-on and volts-off stages of the simulation. By fitting the model to measured charge data with the highest R-square value, the trapping parameters and injection barriers of both normal and aged low-density polyethylene samples were estimated. The results show that, after a long-term ageing process, the injection barriers for both electrons and holes are lowered, the overall trap depth becomes shallower, and the trap density becomes much greater. Additionally, the parameters for electrons are more sensitive to ageing than those for holes
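The fitting criterion mentioned above, picking the parameter set whose simulated charge profile has the highest R-square against the measurement, can be illustrated with a plain coefficient-of-determination computation. The charge samples and candidate fits below are hypothetical.

```python
def r_squared(measured, simulated):
    """Coefficient of determination: how well a simulated charge profile
    fits the measured one (1.0 is a perfect fit)."""
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - s) ** 2 for m, s in zip(measured, simulated))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

# Hypothetical charge-density samples: candidate A fits better than B,
# so A's trapping parameters would be the ones retained.
measured = [0.0, 1.0, 2.0, 3.0]
cand_a   = [0.1, 1.0, 1.9, 3.0]
cand_b   = [1.0, 1.0, 1.0, 1.0]
print(r_squared(measured, cand_a) > r_squared(measured, cand_b))  # True
```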

    A Predictive Model for Steady State Ozone Concentration at an Urban-Coastal Site

    Get PDF
    Ground-level ozone (O3) plays an important role in controlling the oxidation budget in the boundary layer, and it affects the environment and causes severe health disorders. Although present only in small quantities, ozone is a well-known greenhouse gas and contributes to global warming. In this study, we present a predictive model for steady-state ozone concentrations during daytime (13:00–17:00) and nighttime (01:00–05:00) at an urban coastal site. The model is based on a modified approach to the null cycle of O3 and NOx and was evaluated against a one-year database of O3 and nitrogen oxides (NO and NO2) measured at an urban coastal site in Jeddah, on the west coast of Saudi Arabia. The daytime model was found to be linearly dependent on the concentration ratio of NO2 to NO, whereas the nighttime model is suggested to be inversely proportional to the NO2 concentration. Although the reactions involved in tropospheric O3 formation are very complex, the proposed model provides reasonable predictions for daytime and nighttime concentrations. Since the current description of the model is based solely on the null cycle of O3 and NOx, other precursors could be considered in future developments of this model. This study will serve as a basis for future studies that might inform strategies to control ground-level O3 concentrations as well as the emissions of its precursors
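The two functional forms stated in the abstract, daytime O3 linear in the NO2/NO ratio and nighttime O3 inversely proportional to NO2, can be written down directly. The coefficients below are hypothetical placeholders, not the fitted values from the Jeddah dataset.

```python
# Hypothetical fitted coefficients (illustrative only).
A_DAY = 12.0     # ppb: slope of the daytime linear relation
B_NIGHT = 300.0  # ppb^2: constant of the nighttime inverse relation

def o3_day(no2_ppb, no_ppb):
    """Daytime steady-state O3, linear in the NO2/NO ratio."""
    return A_DAY * (no2_ppb / no_ppb)

def o3_night(no2_ppb):
    """Nighttime steady-state O3, inversely proportional to NO2."""
    return B_NIGHT / no2_ppb

print(o3_day(20.0, 5.0))  # 48.0
print(o3_night(10.0))     # 30.0
```

The daytime form echoes a photostationary-state style relation for the O3/NOx null cycle, which is why a single ratio can carry most of the predictive signal.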
